EcoFollower: An Environment-Friendly Car Following Model Considering Fuel Consumption
Zhong, Hui, Chen, Xianda, Tiu, PakHin, Lu, Hongliang, Zhu, Meixin
To alleviate energy shortages and environmental impacts caused by transportation, this study introduces EcoFollower, a novel eco-car-following model developed using reinforcement learning (RL) to optimize fuel consumption in car-following scenarios. Using the NGSIM datasets, the performance of EcoFollower was assessed against the well-established Intelligent Driver Model (IDM). The findings demonstrate that EcoFollower excels at simulating realistic driving behaviors, maintaining smooth vehicle operation, and closely matching the ground-truth metrics of time-to-collision (TTC), headway, and comfort. Notably, the model achieved a significant reduction in fuel consumption, lowering it by 10.42% compared to actual driving scenarios. These results underscore the capability of RL-based models like EcoFollower to enhance autonomous vehicle algorithms, promoting safer and more energy-efficient driving strategies.
- Asia > China > Guangdong Province > Guangzhou (0.05)
- North America > United States (0.04)
- Asia > China > Hong Kong (0.04)
- Transportation > Ground > Road (1.00)
- Energy (1.00)
- Automobiles & Trucks (1.00)
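The IDM baseline that EcoFollower is compared against has a standard closed form. A minimal sketch follows; the parameter values (`v0`, `T`, `a_max`, `b`, `s0`) are illustrative defaults, not values taken from the paper:

```python
import math

def idm_acceleration(v, delta_v, s,
                     v0=33.3,    # desired speed (m/s), illustrative
                     T=1.5,      # desired time headway (s)
                     a_max=1.0,  # maximum acceleration (m/s^2)
                     b=2.0,      # comfortable deceleration (m/s^2)
                     s0=2.0,     # minimum standstill gap (m)
                     delta=4):   # acceleration exponent
    """Intelligent Driver Model: acceleration of the following vehicle.

    v       -- current FV speed (m/s)
    delta_v -- approach rate, FV speed minus LV speed (m/s)
    s       -- current bumper-to-bumper gap to the LV (m)
    """
    # Desired dynamic gap: grows with speed and with the approach rate.
    s_star = s0 + max(0.0, v * T + v * delta_v / (2 * math.sqrt(a_max * b)))
    # Free-flow term minus interaction term.
    return a_max * (1 - (v / v0) ** delta - (s_star / s) ** 2)
```

At a large gap with no speed difference the free-flow term dominates and the vehicle accelerates; at a small gap while closing in, the interaction term forces braking.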
FollowNet: A Comprehensive Benchmark for Car-Following Behavior Modeling
Chen, Xianda, Zhu, Meixin, Chen, Kehua, Wang, Pengqin, Lu, Hongliang, Zhong, Hui, Han, Xu, Wang, Yinhai
Car-following is a control process in which a following vehicle (FV) adjusts its acceleration to keep a safe distance from the lead vehicle (LV). Recently, there has been a boom in data-driven models that enable more accurate modeling of car-following through real-world driving datasets. Although several public datasets are available, their formats are not always consistent, making it challenging to determine the state-of-the-art models and how well a new model performs compared to existing ones. In contrast, research fields such as image recognition and object detection have benchmark datasets like ImageNet, Microsoft COCO, and KITTI. To address this gap and promote the development of microscopic traffic flow modeling, we establish a public benchmark dataset for car-following behavior modeling. The benchmark consists of more than 80K car-following events extracted from five public driving datasets using the same criteria. These events cover diverse situations including different road types, various weather conditions, and mixed traffic flows with autonomous vehicles. Moreover, to give an overview of current progress in car-following modeling, we implemented and tested representative baseline models with the benchmark. Results show that the deep deterministic policy gradient (DDPG) based model performs competitively, with a lower MSE for spacing compared to the traditional intelligent driver model (IDM) and Gazis-Herman-Rothery (GHR) models, and a smaller collision rate compared to fully connected neural network (NN) and long short-term memory (LSTM) models in most datasets. The established benchmark will provide researchers with consistent data formats and metrics for cross-comparing different car-following models, promoting the development of more accurate models. We open-source our dataset and implementation code at https://github.com/HKUST-DRIVE-AI-LAB/FollowNet.
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- Europe > Switzerland > Zürich > Zürich (0.14)
- Asia > China > Guangdong Province > Guangzhou (0.05)
- (7 more...)
- Overview (1.00)
- Research Report > New Finding (0.66)
- Transportation > Ground > Road (1.00)
- Automobiles & Trucks (1.00)
- Information Technology (0.69)
- Government > Regional Government > North America Government > United States Government (0.46)
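The benchmark's two headline metrics, MSE of spacing and collision rate, can be sketched as below. This is an assumed reading of the metric definitions (in particular, treating any non-positive gap as a collision), not FollowNet's exact implementation:

```python
def spacing_mse(sim_spacing, true_spacing):
    """Mean squared error between a simulated and an observed spacing profile.

    Both arguments are per-time-step bumper-to-bumper gaps (m) of equal length.
    """
    n = len(true_spacing)
    return sum((s - t) ** 2 for s, t in zip(sim_spacing, true_spacing)) / n

def collision_rate(events_spacing):
    """Fraction of simulated events whose gap ever drops to zero or below."""
    collided = sum(1 for event in events_spacing if min(event) <= 0.0)
    return collided / len(events_spacing)
```

Averaging `spacing_mse` over all events in a dataset gives a single score per model, which is how the baselines above can be ranked consistently.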
TransFollower: Long-Sequence Car-Following Trajectory Prediction through Transformer
Zhu, Meixin, Du, Simon S., Wang, Xuesong, Yang, Hao, Pu, Ziyuan, Wang, Yinhai
Car-following refers to a control process in which the following vehicle (FV) tries to keep a safe distance between itself and the lead vehicle (LV) by adjusting its acceleration in response to the actions of the vehicle ahead. The corresponding car-following models, which describe how one vehicle follows another in the traffic flow, form the cornerstone of microscopic traffic simulation and intelligent vehicle development. One major motivation of car-following models is to replicate human drivers' longitudinal driving trajectories. To model the long-term dependency of future actions on historical driving situations, we developed a long-sequence car-following trajectory prediction model based on the attention-based Transformer architecture. The model follows a general encoder-decoder architecture. The encoder takes historical speed and spacing data as inputs and forms a mixed representation of historical driving context using multi-head self-attention. The decoder takes the future LV speed profile as input and outputs the predicted future FV speed profile in a generative way (instead of an auto-regressive way, avoiding compounding errors). Through cross-attention between encoder and decoder, the decoder learns to build a connection between historical driving and future LV speed, based on which a prediction of future FV speed can be obtained. We train and test our model with 112,597 real-world car-following events extracted from the Shanghai Naturalistic Driving Study (SH-NDS). Results show that the model outperforms the traditional intelligent driver model (IDM), a fully connected neural network model, and a long short-term memory (LSTM) based model in terms of long-sequence trajectory prediction accuracy. We also visualized the self-attention and cross-attention heatmaps to explain how the model derives its predictions.
- Asia > China > Shanghai > Shanghai (0.25)
- North America > United States > Virginia (0.04)
- Europe > Germany > Baden-Württemberg > Karlsruhe Region > Karlsruhe (0.04)
- Automobiles & Trucks (1.00)
- Transportation > Ground > Road (0.46)
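The cross-attention step described in the abstract, where decoder states query the encoder's representation of historical driving, reduces to scaled dot-product attention. A single-head sketch in plain Python, omitting the learned query/key/value projection matrices a real Transformer layer would apply:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention (single head, no projections).

    queries -- decoder states, one per future time step, each a list of d floats
    keys    -- encoder states (historical driving context), same width d
    values  -- encoder states carrying the content to be mixed, aligned with keys
    Returns one context vector per query: a softmax-weighted blend of values.
    """
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        ctx = [sum(w * v[j] for w, v in zip(weights, values))
               for j in range(len(values[0]))]
        out.append(ctx)
    return out
```

A query that aligns strongly with one encoder key pulls its context vector almost entirely from that key's value, which is exactly the pattern the paper's cross-attention heatmaps visualize.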
Safe, Efficient, and Comfortable Velocity Control based on Reinforcement Learning for Autonomous Driving
Zhu, Meixin, Wang, Yinhai, Hu, Jingyun, Wang, Xuesong, Ke, Ruimin
A model for velocity control during car following was proposed based on deep reinforcement learning (RL). To fulfill the multiple objectives of car following, a reward function reflecting driving safety, efficiency, and comfort was constructed. With this reward function, the RL agent learns to control vehicle speed in a fashion that maximizes cumulative rewards, through trial and error in the simulation environment. A total of 1,341 car-following events extracted from the Next Generation Simulation (NGSIM) dataset were used to train the model. Car-following behavior produced by the model was compared with that observed in the empirical NGSIM data, to demonstrate the model's ability to follow a lead vehicle safely, efficiently, and comfortably. Results show that the model is capable of safe, efficient, and comfortable velocity control in that it 1) has a smaller percentage (8%) of dangerous minimum time-to-collision values (< 5 s) than human drivers in the NGSIM data (35%); 2) can maintain efficient and safe headways in the range of 1 s to 2 s; and 3) can follow the lead vehicle comfortably with smooth acceleration. These results indicate that reinforcement learning methods could contribute to the development of autonomous driving systems.
- North America > United States > Florida > Orange County > Orlando (0.14)
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- Asia > China > Shanghai > Shanghai (0.05)
- (6 more...)
- Transportation > Ground > Road (1.00)
- Automobiles & Trucks (1.00)
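A multi-objective reward of the kind the abstract describes can be sketched as below. The thresholds (a 5 s TTC danger zone, a 1-2 s headway band) follow the abstract, but the functional form and the weights are illustrative assumptions, not the paper's exact reward:

```python
import math

def step_reward(ttc, headway, jerk,
                w_safe=1.0, w_eff=1.0, w_comf=0.1):
    """Combine safety, efficiency, and comfort into one scalar per-step reward.

    ttc     -- time to collision (s); small values are dangerous
    headway -- time headway to the lead vehicle (s); roughly 1-2 s is desirable
    jerk    -- rate of change of acceleration (m/s^3); large magnitudes hurt comfort
    """
    # Safety: log-barrier penalty that grows as TTC falls below the 5 s threshold.
    safety = math.log(ttc / 5.0) if 0.0 < ttc < 5.0 else 0.0
    # Efficiency: reward headways inside the 1-2 s band, penalize drift outside it.
    efficiency = 1.0 if 1.0 <= headway <= 2.0 else -abs(headway - 1.5)
    # Comfort: penalize squared jerk to encourage smooth acceleration profiles.
    comfort = -jerk ** 2
    return w_safe * safety + w_eff * efficiency + w_comf * comfort
```

The agent maximizing the cumulative sum of this reward is pushed toward exactly the three outcomes reported above: TTC kept above the danger threshold, headway held inside the efficient band, and acceleration kept smooth.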